
    Information Recovery from Pairwise Measurements

    A variety of information processing tasks in practice involve recovering n objects from single-shot graph-based measurements, particularly those taken over the edges of some measurement graph G. This paper concerns the situation where each object takes value over a group of M different values, and where one wishes to recover all these values based on observations of certain pairwise relations over G. The imperfection of measurements presents two major challenges for information recovery: 1) inaccuracy: a (dominant) portion 1-p of measurements are corrupted; 2) incompleteness: a significant fraction of pairs are unobservable, i.e. G can be highly sparse. Under a natural random outlier model, we characterize the minimax recovery rate, that is, the critical threshold of the non-corruption rate p below which exact information recovery is infeasible. This accommodates a very general class of pairwise relations. For various homogeneous random graph models (e.g. Erdős–Rényi random graphs, random geometric graphs, small-world graphs), the minimax recovery rate depends almost exclusively on the edge sparsity of the measurement graph G, irrespective of other graphical metrics. This fundamental limit decays with the group size M at a square-root rate before entering a connectivity-limited regime. Under the Erdős–Rényi random graph, a tractable combinatorial algorithm is proposed to approach the limit for large M (M = n^{Ω(1)}), while order-optimal recovery is enabled by semidefinite programs in the small-M regime. The extended (and most updated) version of this work can be found at http://arxiv.org/abs/1504.01369. Comment: This version is no longer updated -- please find the latest version at arXiv:1504.01369
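As a toy illustration of the measurement model described above (a sketch under simplifying assumptions, not the paper's combinatorial or SDP algorithm), the following Python snippet recovers Z_M-valued objects from corrupted pairwise differences on a dense graph by majority vote over two-hop chains through intermediate nodes; all parameter names and the recovery heuristic are illustrative assumptions.

```python
import random

def simulate_recovery(n=200, M=6, p=0.8, seed=0):
    """Toy model: recover group elements x_i in Z_M from pairwise
    difference measurements y_ij = (x_i - x_j) mod M, each measurement
    correct with probability p and uniformly random otherwise."""
    rng = random.Random(seed)
    x = [rng.randrange(M) for _ in range(n)]

    def measure(i, j):
        # a single corrupted-with-probability-(1-p) pairwise observation
        if rng.random() < p:
            return (x[i] - x[j]) % M
        return rng.randrange(M)

    est = [0] * n
    for i in range(1, n):
        votes = [0] * M
        for k in range(n):
            if k in (0, i):
                continue
            # chain two measurements: (x_i - x_k) + (x_k - x_0)
            v = (measure(i, k) + measure(k, 0)) % M
            votes[v] += 1
        est[i] = max(range(M), key=votes.__getitem__)
    # est[i] estimates (x_i - x_0) mod M, i.e. recovery up to a global shift
    return sum(est[i] == (x[i] - x[0]) % M for i in range(n)) / n
```

With p = 0.8 each two-hop chain is correct with probability about 0.7, so aggregating roughly n votes per node makes exact recovery (up to a global offset, which is inherently unidentifiable from pairwise differences) overwhelmingly likely.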

    A review of mentorship measurement tools

    © 2016 Elsevier Ltd. Objectives: To review mentorship measurement tools in various fields to inform nursing educators on the selection, application, and development of mentoring instruments. Design: A literature review informed by the PRISMA 2009 guidelines. Data Sources: Six databases: CINAHL, Medline, PsycINFO, Academic Search Premier, ERIC, Business Source Premier. Review Methods: Search terms and strategies used: mentor* N3 (behav* or skill? or role? or activit? or function* or relation*) and (scale or tool or instrument or questionnaire or inventory). The time limiter was set from January 1985 to June 2015. Extracted data were the content of instruments, samples, psychometrics, theoretical frameworks, and utility. An integrative review method was used. Results: Twenty-eight papers linked to 22 scales were located: 7 from business and industry, 11 from education, 3 from health science, and 1 focused on research mentoring. Mentorship measurement was pioneered by business with a universally accepted theoretical framework, i.e. career function and psychosocial function, and scale development is evolving: from a focus on the positive side of mentorship toward negative mentoring experiences and challenges. Nursing educators have mainly used instruments from business to assess mentorship among nursing teachers. In education and nursing, measurement has taken on a more specialised focus: researchers in different contexts have developed scales to measure different specific aspects of mentorship. Most tools show psychometric evidence of content homogeneity and construct validity but lack more comprehensive and advanced tests. Conclusion: Mentorship is widely used and conceptualised differently in different fields, and is less mature in nursing than in business. Measurement of mentorship is moving toward a more specialised and comprehensive process. Business and education have provided measurement tools that nursing educators use to assess mentorship among staff, but a robust instrument to measure nursing students' mentorship is still needed.

    On the Minimax Capacity Loss under Sub-Nyquist Universal Sampling

    This paper investigates the information rate loss in analog channels when the sampler is designed to operate independently of the instantaneous channel occupancy. Specifically, a multiband linear time-invariant Gaussian channel under universal sub-Nyquist sampling is considered. The entire channel bandwidth is divided into n subbands of equal bandwidth. At each time only k constant-gain subbands are active, where the instantaneous subband occupancy is known at neither the receiver nor the sampler. We study the information loss through a capacity loss metric, that is, the capacity gap caused by the lack of instantaneous subband occupancy information. We characterize the minimax capacity loss for the entire sub-Nyquist rate regime, provided that the number n of subbands and the SNR are both large. The minimax limits depend almost solely on the band sparsity factor and the undersampling factor, modulo some residual terms that vanish as n and the SNR grow. Our results highlight the power of randomized sampling methods (i.e. samplers that consist of random periodic modulation and low-pass filters), which are able to approach the minimax capacity loss with exponentially high probability. Comment: accepted to IEEE Transactions on Information Theory. It has been presented in part at the IEEE International Symposium on Information Theory (ISIT) 201

    Journey to the Center of the Fuzzball

    We study two-charge fuzzball geometries, with attention to the use of the proper duality frame. For zero angular momentum there is an onion-like structure, and the smooth D1-D5 geometries are not valid for typical states. Rather, they are best approximated by geometries with stringy sources, or by a free CFT. For non-zero angular momentum we find a regime where smooth fuzzball solutions are the correct description. Our analysis rests on the comparison of three radii: the typical fuzzball radius, the entropy radius determined by the microscopic theory, and the breakdown radius where the curvature becomes large. We attempt to draw more general lessons. Comment: 22 pages, 1 figure

    Channel Capacity under Sub-Nyquist Nonuniform Sampling

    This paper investigates the effect of sub-Nyquist sampling upon the capacity of an analog channel. The channel is assumed to be a linear time-invariant Gaussian channel, where perfect channel knowledge is available at both the transmitter and the receiver. We consider a general class of right-invertible time-preserving sampling methods which include irregular nonuniform sampling, and characterize in closed form the channel capacity achievable by this class of sampling methods, under a sampling rate and power constraint. Our results indicate that the optimal sampling structures extract out the set of frequencies that exhibits the highest signal-to-noise ratio among all spectral sets of measure equal to the sampling rate. This can be attained through filterbank sampling with uniform sampling at each branch with possibly different rates, or through a single branch of modulation and filtering followed by uniform sampling. These results reveal that for a large class of channels, employing irregular nonuniform sampling sets, while typically complicated to realize, does not provide capacity gain over uniform sampling sets with appropriate preprocessing. Our findings demonstrate that aliasing or scrambling of spectral components does not provide capacity gain, which is in contrast to the benefits obtained from random mixing in spectrum-blind compressive sampling schemes.Comment: accepted to IEEE Transactions on Information Theory, 201
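The frequency-selection principle above can be sketched in a toy discretized form (an illustrative assumption, not the paper's continuous-spectrum derivation): keep the frequency bins with the highest SNR, of total measure equal to the sampling rate, then water-fill the power budget over the retained bins. The function name and discretization are hypothetical.

```python
import math

def capacity_best_subband(snr, rate_fraction, power=1.0):
    """Toy discretization: retain the fraction of frequency bins with
    the highest SNR (measure equal to the sampling rate), then
    water-fill power over them. snr[i] is the per-bin SNR at unit power."""
    n = len(snr)
    k = max(1, int(rate_fraction * n))
    kept = sorted(snr, reverse=True)[:k]
    # water-filling: find level w with sum_i max(0, w - 1/g_i) = power
    inv = sorted(1.0 / g for g in kept)  # ascending inverse gains
    w, used = 0.0, 0
    for m in range(1, k + 1):
        w = (power + sum(inv[:m])) / m
        if m == k or w <= inv[m]:
            used = m
            break
    # per-bin rate log2(1 + p_i g_i) with p_i = w - 1/g_i simplifies to log2(w g_i)
    bits = sum(math.log2(w / inv[i]) for i in range(used))
    return bits / n  # bits per full-band channel use
```

With a fixed power budget, enlarging the retained set never helps once the water level stops covering the weaker bins, which is consistent with the abstract's point that the best sampler concentrates on the highest-SNR frequencies rather than mixing spectral components.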

    Channel Capacity under General Nonuniform Sampling

    This paper develops the fundamental capacity limits of a sampled analog channel under a sub-Nyquist sampling rate constraint. In particular, we derive the capacity of sampled analog channels over a general class of time-preserving sampling methods including irregular nonuniform sampling. Our results indicate that the optimal sampling structures extract out the set of frequencies that exhibits the highest SNR among all spectral sets of support size equal to the sampling rate. The capacity under sub-Nyquist sampling can be attained through filter-bank sampling, or through a single branch of modulation and filtering followed by uniform sampling. The capacity under sub-Nyquist sampling is a monotone function of the sampling rate. These results indicate that the optimal sampling schemes suppress aliasing, and that employing irregular nonuniform sampling does not provide capacity gain over uniform sampling sets with appropriate preprocessing for a large class of channels.Comment: 5 pages, to appear in IEEE International Symposium on Information Theory (ISIT), 201

    Darcy law for yield stress fluid

    Predicting the flow of non-Newtonian fluids in porous structures is still a challenging issue due to the interplay between the microscopic disorder and the non-linear rheology. In this letter, we study the case of a yield stress fluid in a two-dimensional structure. Thanks to an efficient optimization algorithm, we show that the system undergoes a continuous phase transition in the behavior of the flow, controlled by the applied pressure drop. In analogy with studies of the plastic depinning of vortex lattices in high-T_c superconductors, we characterize the nonlinearity of the flow curve and relate it to the change in the geometry of the open channels. In particular, close to the transition, a universal scale-free distribution of the channel lengths is observed and explained theoretically via a mapping to the KPZ equation. Comment: 5 pages, 4 figures + 1 Supplementary material
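A minimal toy picture of the channel-opening mechanism (a sketch under strong simplifying assumptions, not the paper's optimization algorithm): a set of parallel channels, each conducting only once the applied pressure drop exceeds its individual yield threshold, already produces a flow curve that is zero below a critical pressure drop and becomes increasingly nonlinear as more channels open.

```python
def flow_curve(thresholds, dP, mobility=1.0):
    """Toy network of independent parallel channels: channel i conducts
    only once the applied pressure drop dP exceeds its yield threshold
    tau_i; its contribution then grows linearly in the excess pressure."""
    return sum(mobility * (dP - t) for t in thresholds if dP > t)

# hypothetical thresholds: no flow below 0.2, channels then open one by one
thresholds = [0.2, 0.5, 0.9, 1.4]
curve = [(dP, flow_curve(thresholds, dP)) for dP in (0.1, 0.3, 1.0, 2.0)]
```

In the real disordered medium the open channels interact through the pore geometry, which is what gives rise to the scale-free channel-length statistics; this independent-channel caricature only illustrates the yield-onset nonlinearity of the flow curve.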